- Google is testing AI-generated answers in regular search results.
- The experiments are labeled as "AI overviews" and began about a month ago in the US and the UK.
- Remember when Google's purpose was to compile lists of sites that had the information you want? That was a long time ago.
Sure, you've heard about AI and tech like ChatGPT. An increasing number of you are even trying to use that tech in your everyday life — for fun or school or maybe even work.
For the most part, though, you have to go looking for AI if you want to use it in your daily life. If you want AI-generated results, you need to use the ChatGPT website or app, or click on a specific tab on something like Microsoft's Bing.
Now that's changing, at least for some Google searchers, like me: On Monday morning, using regular old Google, I typed in "truman doctrine vietnam war" and got back an "AI overview" — not something Google found on another site, but something it wrote itself.
I haven't spent a lot of time reading up on US foreign policy post-WWII recently — that's why I asked Google — but I think it was a pretty good answer, actually?
Much more interesting than the answer, though, was how Google came up with it.
I knew that Google was scrambling to catch up in the AI product wars, and that it was worried about the fate of its core search product in a world where there would be no need for a list of links to websites with information — where an AI engine would simply generate the answers you need. And I knew that Google was working on an AI-powered version of search, which you could experiment with yourself.
But I didn't know that Google had started putting this stuff out there for normals, mixed in with every other search result.
Turns out it has, for about a month. And it may or may not be telling that Google didn't make a big announcement about this — so far, the only mention I can find on the web is this update from the trade journal Search Engine Land.
The upshot: For now, Google is testing out self-generated "AI overviews" for some regular search queries where it thinks the answer might be complex (but answerable). A Google rep told me the AI answers have been deployed in a "very limited percentage of search traffic" in the US and the UK.
And I shouldn't be that surprised, really: Ever since OpenAI started blowing people's minds with ChatGPT in the fall of 2022, it was clear this tech was coming to search — that was the whole point of Microsoft's big partnership with OpenAI that brought the tech to Bing.
But it's one thing to know that and another to start seeing it in the wild. And to start seeing it as a normal result as opposed to something special.
Google does seem to be handling this well, and addressing lots of the obvious issues that Google-written answers will raise in search.
For instance, it clearly labels the results as AI-generated, and experimental. And a "learn more" link brings you to this well-written explainer that says things like "Generative AI is not a human being. It can't think for itself or feel emotions. It's just great at finding patterns." (And, lower down, "Because generative AI is experimental and a work in progress, it can and will make mistakes.")
And Google also shows its work: If you click on a button in the result, it will surface links to helpful, relevant sites like the National Archives.
Because I am A Responsible Journalist, I also asked Google how Google Magi, which powers these results, does or doesn't interact with the tech powering Google Gemini, its much-maligned "woke AI" chatbot. I didn't get much of a satisfying answer, other than that Google would like you to think of them as separate products.
I'm happy to let other people worry about Google's wokeness, though. (You do hear a lot less hollering about this now, don't you?) I am more interested in how these kinds of answers will accelerate the way we already use Google — as a one-stop answers shop, instead of a place that helps you find another place that has your answers.
That trend has been well-documented for years and revolves around Google deciding it would rather have you hang out on Google — through one of its "knowledge panels" or something similar — than go somewhere else to get an answer. Even though Google's business model revolves around selling links to somewhere else.
Right now, Google tries to keep you on Google by surfacing text from a site that purports to answer your query. In theory, if you want to learn more, you can click through. But often, Google's excerpt gives you no incentive to click through. You've got what you need. You're done.
(Meanwhile, I can think of many cases where the information Google cites in those text snippets is wrong — at least in part because Google is relying on highly ranked web pages whose main purpose isn't to be accurate but to rank highly on Google. Ask Google "What is Jason Kelce's net worth," for instance, and it will highlight a (not remotely helpful) answer from a debit card site that also offers search-bait about Taylor Swift's cat.)
And once you start imagining Google providing fully AI-generated answers like this all the time, with all kinds of queries, things get really interesting.
On the one hand, maybe Google becomes even more valuable because you're no longer even pretending it's a "search engine" — it's just an answer machine. And you go there because you're used to going there.
On the other hand: In that scenario, Google certainly won't be the only answer machine. Which means the whole empire, and the many businesses that depend on the empire (like, um, digital publishers?), goes up for grabs.
You can see why Google is testing this stuff carefully and quietly — and also why it needs to figure out the answer, fast.